
    Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

    Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. That is, adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. This work provides a solution for hardening DNNs under adversarial attacks through defensive dropout. Besides using dropout during training for the best test accuracy, we propose to use dropout also at test time to achieve strong defense effects. We consider the problem of building robust DNNs as an attacker-defender two-player game, where the attacker and the defender know each other's strategies and try to optimize their own strategies towards an equilibrium. Based on observations of the effect of the test dropout rate on test accuracy and attack success rate, we propose a defensive dropout algorithm to determine an optimal test dropout rate given the neural network model and the attacker's strategy for generating adversarial examples. We also investigate the mechanism behind the outstanding defense effects achieved by the proposed defensive dropout. Compared with stochastic activation pruning (SAP), another defense method that introduces randomness into the DNN model, we find that our defensive dropout achieves much larger variances of the gradients, which is the key to the improved defense effects (a much lower attack success rate). For example, our defensive dropout can reduce the attack success rate from 100% to 13.89% under the currently strongest attack, i.e., the C&W attack, on the MNIST dataset. Comment: Accepted as conference paper on ICCAD 201
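The core idea of the abstract above, keeping dropout switched on at inference so that every query sees a different random subnetwork, can be sketched in a few lines of plain Python. The one-layer "network" and its weights below are hypothetical stand-ins for illustration, not the paper's models or dropout-rate selection algorithm:

```python
import random

def dropout(vec, rate, rng):
    # inverted dropout: zero each unit with probability `rate`,
    # rescale survivors by 1/(1 - rate) to keep expectations unchanged
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in vec]

def stochastic_forward(x, weights, rate, rng):
    # toy one-layer "network": linear map followed by test-time dropout
    hidden = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return dropout(hidden, rate, rng)

rng = random.Random(0)
W = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]
x = [1.0, 2.0]
out1 = stochastic_forward(x, W, 0.3, rng)
out2 = stochastic_forward(x, W, 0.3, rng)
# repeated queries on the same input generally differ, which is what
# raises the variance of the gradients an iterative attacker estimates
```

Because each forward pass samples a fresh mask, an attacker probing the model repeatedly sees inconsistent outputs and gradients, which is the mechanism the abstract credits for the lower attack success rate.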

    Integration of Ideological and Political Education into the Civil Engineering Curriculum: A Case Study of the “Steel Bridge” Course at Southwest Jiaotong University’s Hope College

    This research paper explores the successful integration of ideological and political education into the curriculum of the civil engineering program, focusing on the course “Steel Bridge” at Southwest Jiaotong University’s Hope College. The study outlines the course objectives, curriculum design, and teaching strategies, and assesses the impact of ideological and political education on students’ comprehensive development. By examining the course’s teaching methods, content, and effectiveness, the paper aims to provide insights into the broader implementation of ideological and political education in engineering education. The findings reveal notable improvements in students’ political awareness, moral character, and overall competence through the infusion of ideological and political elements within the technical curriculum. This case study serves as a model for similar courses and contributes to the ongoing discourse on cultivating well-rounded engineers with a strong sense of social responsibility and ethical values.

    Block Switching: A Stochastic Approach for Deep Learning Security

    Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models. That is, subtly crafted perturbations of the input can make a trained network with high accuracy produce arbitrary incorrect predictions, while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS is also characterized by the following features: (i) BS causes less test accuracy drop; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them. Comment: Accepted by AdvML19: Workshop on Adversarial Learning Methods for Machine Learning and Data Mining at KDD, Anchorage, Alaska, USA, August 5th, 2019, 5 page
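The switching mechanism itself is simple enough to sketch: per forward call, one of several parallel channels is chosen at random, so the active computation path is unpredictable. The three scaling functions below are hypothetical stand-ins for independently trained layer blocks, not the paper's architecture:

```python
import random

def make_block_switch(channels, rng):
    """Route each forward call through one randomly chosen parallel
    channel, so the active path changes from query to query."""
    def forward(x):
        return rng.choice(channels)(x)
    return forward

# hypothetical stand-ins for parallel sub-blocks with different weights
rng = random.Random(1)
channels = [lambda x, s=s: [s * v for v in x] for s in (0.9, 1.0, 1.1)]
forward = make_block_switch(channels, rng)
y = forward([1.0, 2.0])  # which channel ran is random per call
```

An adversary computing gradients through one sampled channel has no guarantee the same channel will be active on the next query, which is why the abstract reports a more dispersed input gradient distribution.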

    Information extraction to facilitate translation of natural language legislation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 65-66). There is a large body of existing legislation and policies that govern how government organizations and corporations can share information. Since these rules are generally expressed in natural language, it is difficult and labor-intensive to verify whether or not data sharing events are compliant with the relevant policies. This work aims to develop a natural language processing framework that automates significant portions of this translation process, so that legal policies are more accessible to existing automated reasoning systems. Even though these laws are expressed in natural language, in this very specific domain only a handful of sentence structures are actually used to convey logic. This structure can be exploited so that the program can automatically detect the actor, action, object, and conditions of each rule. In addition, once the structure of a rule is identified, similar rules can be presented to the user. If integrated into an authoring environment, this will allow the user to reuse previously translated rules as templates to translate novel rules more easily, independent of the target language for translation. A body of 315 real-world rules from 12 legal sources was collected and annotated for this project. Cross-validation experiments were conducted on this annotated data set, and the developed system was successful in identifying the underlying rule structure 43% of the time, and in annotating the underlying tokens with a recall of 0.66 and a precision of 0.66. In addition, for 70% of the rules in each test set, the underlying rule structure had been seen in the training set. This supports the hypothesis that these rules are expressed in only a limited number of ways. by Samuel Wang. S.M.
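The actor/action/object/condition decomposition described above can be illustrated with a toy pattern for one common rule shape. The regular expression below is a hypothetical simplification for a single template, not the thesis's actual grammar or annotation scheme:

```python
import re

# toy pattern for one common rule shape (illustrative only):
# "<actor> may/shall/must <action> <object> [if <condition>]"
RULE = re.compile(
    r"^(?P<actor>.+?)\s+(?:may|shall|must)\s+(?P<action>\w+)\s+"
    r"(?P<object>.+?)(?:\s+if\s+(?P<condition>.+?))?\.?$"
)

def parse_rule(sentence):
    """Return actor/action/object/condition fields, or None if the
    sentence does not fit the template."""
    m = RULE.match(sentence.strip())
    return m.groupdict() if m else None

parsed = parse_rule("An agency may disclose records if the subject has consented.")
```

A real system would need many such templates plus parsing to handle the variety in the 315-rule corpus, but the point stands: once a sentence matches a known structure, its logical fields fall out mechanically, and the same template can be offered as a reuse candidate for similar rules.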

    Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent

    Despite the great achievements of modern deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. Various adversarial attacks have been proposed to sabotage the learning performance of DNN models. Among those, black-box adversarial attack methods have received special attention owing to their practicality and simplicity. Black-box attacks usually prefer fewer queries in order to remain stealthy and keep costs low. However, most current black-box attack methods adopt the first-order gradient descent method, which may come with certain deficiencies such as relatively slow convergence and high sensitivity to hyper-parameter settings. In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD) method to design adversarial attacks, which incorporates the zeroth-order gradient estimation technique catering to the black-box attack scenario and the second-order natural gradient descent to achieve higher query efficiency. Empirical evaluations on image classification datasets demonstrate that ZO-NGD can obtain significantly lower model query complexities compared with state-of-the-art attack methods. Comment: accepted by AAAI 202
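The zeroth-order ingredient, estimating a gradient from function queries alone, is the part that makes the attack black-box. A minimal sketch of a standard two-sided random-direction estimator follows; it shows the query-only estimation idea, not the paper's ZO-NGD method (which additionally uses the Fisher information for a natural-gradient step):

```python
import random

def zo_gradient(f, x, mu=1e-3, samples=500, rng=None):
    """Two-sided zeroth-order gradient estimate over random Gaussian
    directions u: average of ((f(x + mu*u) - f(x - mu*u)) / (2*mu)) * u.
    Only function evaluations are used, as in a black-box attack."""
    rng = rng or random.Random(0)
    n = len(x)
    grad = [0.0] * n
    for _ in range(samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]
        xp = [xi + mu * ui for xi, ui in zip(x, u)]
        xm = [xi - mu * ui for xi, ui in zip(x, u)]
        scale = (f(xp) - f(xm)) / (2.0 * mu)
        grad = [g + scale * ui / samples for g, ui in zip(grad, u)]
    return grad

# sanity check on a known function: f(v) = v0^2 + 3*v1, whose true
# gradient at (1, 0) is (2, 3); the estimate approaches it as samples grow
g = zo_gradient(lambda v: v[0] ** 2 + 3 * v[1], [1.0, 0.0])
```

Each estimate costs two queries per sampled direction, which is exactly why query-efficient methods matter: the attacker's budget is measured in model evaluations, not gradient computations.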

    EMShepherd: Detecting Adversarial Samples via Side-channel Leakage

    Deep Neural Networks (DNNs) are vulnerable to adversarial perturbations: small changes crafted deliberately on the input to mislead the model into wrong predictions. Adversarial attacks have disastrous consequences for deep learning-empowered critical applications. Existing defense and detection techniques both require extensive knowledge of the model, testing inputs, and even execution details. They are not viable for general deep learning implementations where the model internals are unknown, a common 'black-box' scenario for model users. Inspired by the fact that the electromagnetic (EM) emanations of a model inference depend on both operations and data, and may contain footprints of different input classes, we propose a framework, EMShepherd, to capture EM traces of model execution, process the traces, and exploit them for adversarial detection. Only benign samples and their EM traces are used to train the adversarial detector: a set of EM classifiers and class-specific unsupervised anomaly detectors. When the victim model system is under attack by an adversarial example, the model execution differs from the executions for the known classes, and so does the EM trace. We demonstrate that our air-gapped EMShepherd can effectively detect different adversarial attacks on a commonly used FPGA deep learning accelerator for both the Fashion MNIST and CIFAR-10 datasets. It achieves a 100% detection rate on most types of adversarial samples, which is comparable to state-of-the-art 'white-box' software-based detectors.
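The detection principle, fitting class-specific detectors only on benign traces and flagging executions whose traces deviate, can be sketched with a simple distance-threshold anomaly detector. The numeric "traces" and the centroid-distance form below are illustrative stand-ins, not the paper's EM classifiers or its trace preprocessing:

```python
import statistics

class ClassAnomalyDetector:
    """Per-class detector fit only on benign traces: flag a trace whose
    distance from the class centroid exceeds mean + k*stdev of the
    benign training distances (a toy stand-in for EMShepherd's
    class-specific unsupervised detectors)."""

    def __init__(self, k=3.0):
        self.k = k

    def fit(self, traces):
        n = len(traces[0])
        self.centroid = [sum(t[i] for t in traces) / len(traces) for i in range(n)]
        dists = [self._dist(t) for t in traces]
        self.threshold = statistics.mean(dists) + self.k * statistics.pstdev(dists)
        return self

    def _dist(self, trace):
        # Euclidean distance to the benign centroid
        return sum((a - b) ** 2 for a, b in zip(trace, self.centroid)) ** 0.5

    def is_adversarial(self, trace):
        return self._dist(trace) > self.threshold

# toy benign traces for one predicted class
benign = [[1.0, 2.0, 1.0], [1.1, 1.9, 1.0], [0.9, 2.1, 1.1], [1.0, 2.0, 0.9]]
det = ClassAnomalyDetector().fit(benign)
```

The key property this preserves from the abstract is that no adversarial examples are needed for training: anything sufficiently unlike the benign executions of the predicted class is flagged.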

    Variable pitch approach for performance improving of straight-bladed VAWT at rated tip speed ratio

    This paper presents a new variable pitch (VP) approach to increase the peak power coefficient of the straight-bladed vertical-axis wind turbine (VAWT) by widening the azimuthal angle band in which the blade produces the highest aerodynamic torque, instead of increasing the highest torque itself. The new VP approach provides a curve of pitch angle designed for the blade operating at the rated tip speed ratio (TSR) corresponding to the peak power coefficient of the fixed-pitch (FP) VAWT. The effects of the new approach are explored by using the double multiple stream tube (DMST) model and Prandtl’s mathematics to evaluate the blade tip loss. The research describes the effects from six aspects, including the lift, drag, angle of attack (AoA), resultant velocity, torque, and power output, through a comparison between VP-VAWTs and FP-VAWTs working at four TSRs: 4, 4.5, 5, and 5.5. Compared with the FP blade, the VP blade has a wider azimuthal zone with the maximum AoA, lift, drag, and torque in the upwind half-cycle, and yields two new, larger maxima in the downwind half-cycle. The power distribution in the swept area of the turbine changes from the arched shape of the FP-VAWT into the rectangular shape of the VP-VAWT. The new VP approach markedly widens the highest-performance zone of the blade over a revolution, and ultimately achieves an 18.9% growth in the peak power coefficient of the VAWT at the optimum TSR. Besides achieving this growth, the new pitching method also enhances the performance at TSRs higher than the current optimal values, and generates an increase in torque.
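As a rough illustration of why the high-torque band is narrow at fixed pitch, the geometric angle of attack of a straight VAWT blade can be computed from the azimuth and the TSR using the standard kinematic relation alpha = atan(sin(theta) / (lambda + cos(theta))). This ignores induction effects entirely, so it is a textbook simplification, not the paper's DMST model:

```python
import math

def angle_of_attack(theta, tsr):
    """Geometric AoA (rad) of a straight VAWT blade at azimuth theta
    for tip speed ratio tsr (induction ignored; illustrative only)."""
    return math.atan2(math.sin(theta), tsr + math.cos(theta))

tsr = 5.0  # within the rated TSR band studied in the paper (4 to 5.5)
aoa_deg = [math.degrees(angle_of_attack(math.radians(d), tsr))
           for d in range(0, 360, 10)]
peak = max(aoa_deg)  # narrow upwind AoA peak (roughly 11.5 deg at TSR 5)
```

The AoA rises to a single narrow maximum in the upwind half-cycle and mirrors to a negative peak downwind; a pitch schedule added to this relation can hold the blade near its best AoA over a wider azimuthal band, which is the mechanism behind the widened high-performance zone reported above.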